Tuning Parameter Selection for the Adaptive Lasso using ERIC
Authors
Abstract
The adaptive lasso is a commonly applied penalty for variable selection in regression modeling. Like all penalties though, its performance depends critically on the choice of tuning parameter. One method for choosing the tuning parameter is via information criteria, such as those based on AIC and BIC. However, these criteria were developed for use with unpenalized maximum likelihood estimators, and it is not clear that they take into account the effects of penalization. In this article, we propose the Extended Regularized Information Criterion (ERIC) for choosing the tuning parameter in adaptive lasso regression. ERIC extends the BIC to account for the effect of applying the adaptive lasso on the bias-variance tradeoff. This leads to a criterion whose penalty for model complexity is itself a function of the tuning parameter. We show the tuning parameter chosen by ERIC is selection consistent when the number of variables grows with sample size, and that this consistency holds in a wider range of contexts compared to using BIC to choose the tuning parameter. Simulations show that ERIC can significantly outperform BIC and other information criteria proposed (for choosing the tuning parameter) in selecting the true model. For ultra high-dimensional data (p > n), we consider ...
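The abstract breaks off before the criterion's exact form, so the Python sketch below only mirrors the recipe it describes: fit the adaptive lasso along a grid of tuning parameters and pick the value minimizing an information criterion whose model-complexity penalty itself depends on the tuning parameter. The log(n/lambda)-per-variable penalty and the helper names (adaptive_lasso_path, eric_like_criterion) are illustrative assumptions, not the paper's formula.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso_path(X, y, lambdas, gamma=1.0):
    """Fit the adaptive lasso for each lambda via the usual reweighting
    trick: rescale columns by w_j = 1 / |beta_ols_j|**gamma (Zou, 2006),
    run an ordinary lasso, then map coefficients back."""
    beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_ols) ** gamma + 1e-10)
    Xw = X / w
    fits = []
    for lam in lambdas:
        m = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(Xw, y)
        fits.append(m.coef_ / w)
    return np.array(fits)

def eric_like_criterion(X, y, beta, lam, nu=1.0):
    """Gaussian-likelihood criterion whose complexity penalty depends on
    lambda itself; log(n / lam) per selected variable is an assumed
    illustrative form, not the paper's exact formula."""
    n = X.shape[0]
    rss = np.sum((y - X @ beta) ** 2)
    df = np.count_nonzero(beta)
    return n * np.log(rss / n) + 2 * nu * np.log(n / lam) * df

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(n)

lambdas = np.logspace(-3, 0, 30)
betas = adaptive_lasso_path(X, y, lambdas)
scores = [eric_like_criterion(X, y, b, lam) for b, lam in zip(betas, lambdas)]
best = int(np.argmin(scores))
print("chosen lambda:", lambdas[best])
print("selected variables:", np.flatnonzero(betas[best]))
```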
Similar Resources
Self-adaptive Lasso and its Bayesian Estimation
In this paper, we propose a self-adaptive lasso method for variable selection in regression problems. Unlike the popular lasso method, the proposed method introduces a specific tuning parameter for each regression coefficient. We model the self-adaptive lasso in a Bayesian framework and develop an efficient Gibbs sampling algorithm to automatically select these tuning parameters and estimate t...
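The truncated abstract does not show the sampler, so the sketch below illustrates only the structural idea of one tuning parameter per coefficient, via a plain (non-Bayesian) coordinate-descent solver; the helper names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def per_coefficient_lasso(X, y, lams, n_iter=200):
    """Coordinate descent for (1/2)||y - X b||^2 + sum_j lams[j] * |b_j|,
    i.e. a lasso that carries its own tuning parameter for every
    coefficient rather than one global lambda."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = np.sum(X ** 2, axis=0)        # ||x_j||^2 per column
    r = y.astype(float).copy()             # residual y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]         # partial residual excluding j
            beta[j] = soft_threshold(X[:, j] @ r, lams[j]) / col_ss[j]
            r -= X[:, j] * beta[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = 2.0 * X[:, 0] + rng.standard_normal(100)
# heavier penalties on the last three coordinates, lighter on the first three
print(per_coefficient_lasso(X, y, lams=np.array([1.0, 1.0, 1.0, 20.0, 20.0, 20.0])))
```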
On the distribution of the adaptive LASSO estimator
We study the distribution of the adaptive LASSO estimator (Zou (2006)) in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection and for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the ...
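A quick Monte Carlo (simulation settings assumed, not taken from the paper) shows the kind of non-standard finite-sample distribution at issue: for a truly zero coefficient, the adaptive lasso estimator's sampling distribution mixes a point mass at zero with a continuous component.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p, reps = 100, 5, 2000
beta_true = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
estimates = np.empty(reps)

for r in range(reps):
    X = rng.standard_normal((n, p))
    y = X @ beta_true + rng.standard_normal(n)
    ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
    w = 1.0 / np.abs(ols)                    # adaptive weights, gamma = 1
    fit = Lasso(alpha=0.05, fit_intercept=False).fit(X / w, y)
    estimates[r] = (fit.coef_ / w)[1]        # estimator of a truly zero coefficient

# the sampling distribution mixes a point mass at zero with a continuous part
print("P(estimate is exactly 0) ~", np.mean(estimates == 0.0))
```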
Tuning Parameter Selection for Penalized Likelihood Estimation of Inverse Covariance Matrix
In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish the result that ...
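As a minimal sketch of the tuning problem being addressed, not of this paper's criterion: one can fit the graphical lasso over a grid of penalties and score each fit with a BIC-type criterion, here counting degrees of freedom as nonzero off-diagonal precision entries, one common convention.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def bic_glasso(X, alpha):
    """BIC for a graphical-lasso fit; degrees of freedom are counted as
    the number of nonzero off-diagonal precision entries / 2 (a common
    convention, an assumption rather than this paper's criterion)."""
    n, p = X.shape
    K = GraphicalLasso(alpha=alpha, max_iter=200).fit(X).precision_
    S = np.cov(X, rowvar=False)
    loglik = (n / 2) * (np.linalg.slogdet(K)[1] - np.trace(S @ K))
    df = (np.count_nonzero(np.abs(K) > 1e-8) - p) / 2
    return -2 * loglik + np.log(n) * df

rng = np.random.default_rng(2)
p, n = 8, 500
# sparse tridiagonal true precision matrix (positive definite)
prec = np.eye(p) + np.diag(np.full(p - 1, 0.4), 1) + np.diag(np.full(p - 1, 0.4), -1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)

alphas = np.logspace(-2.5, -0.5, 15)
scores = [bic_glasso(X, a) for a in alphas]
print("BIC-chosen alpha:", alphas[int(np.argmin(scores))])
```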
Outcomes of the Equivalence of Adaptive Ridge with Least Absolute Shrinkage
Adaptive Ridge is a special form of Ridge regression, balancing the quadratic penalization on each parameter of the model. It was shown to be equivalent to Lasso (least absolute shrinkage and selection operator), in the sense that both procedures produce the same estimate. Lasso can thus be viewed as a particular quadratic penalizer. From this observation, we derive a fixed point algorithm to c...
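The description of the fixed-point algorithm is cut off, but the equivalence suggests a well-known iteration, sketched here under that reading rather than as the paper's exact algorithm: repeatedly solve a ridge problem whose per-coefficient quadratic penalties are rebalanced by the current estimate.

```python
import numpy as np

def lasso_via_adaptive_ridge(X, y, lam, n_iter=200, eps=1e-10):
    """Iteratively rebalanced ridge for the lasso objective
    (1/2)||y - X b||^2 + lam * ||b||_1: each step solves a ridge problem
    with per-coefficient quadratic penalty lam / |b_j|, and a fixed point
    satisfies the lasso stationarity condition
    X^T (X b - y) + lam * sign(b) = 0 on the nonzero coordinates."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
    G, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        D = np.diag(lam / (np.abs(beta) + eps))   # rebalanced penalties
        beta = np.linalg.solve(G + D, Xty)
    beta[np.abs(beta) < 1e-6] = 0.0               # snap vanishing coefficients
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 6))
y = X @ np.array([2.0, 0, 0, 1.0, 0, 0]) + rng.standard_normal(80)
print(lasso_via_adaptive_ridge(X, y, lam=20.0))
```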
Variable Inclusion and Shrinkage Algorithms
The Lasso is a popular and computationally efficient procedure for automatically performing both variable selection and coefficient shrinkage on linear regression models. One limitation of the Lasso is that the same tuning parameter is used for both variable selection and shrinkage. As a result, it typically ends up selecting a model with too many variables to prevent over-shrinkage of the regr...
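The abstract breaks off before the algorithms themselves, so the sketch below illustrates only the motivating idea of decoupling the two roles, in the style of the relaxed lasso (a simpler stand-in, not the paper's method): one parameter selects the support, a second controls shrinkage on the refit.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_parameter_fit(X, y, lam_select, lam_shrink):
    """Decouple selection from shrinkage, relaxed-lasso style: one tuning
    parameter picks the support, a second (typically smaller, > 0) one
    controls how much the refit on that support is shrunk."""
    support = np.flatnonzero(
        Lasso(alpha=lam_select, fit_intercept=False).fit(X, y).coef_)
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = Lasso(alpha=lam_shrink,
                              fit_intercept=False).fit(X[:, support], y).coef_
    return beta, support

rng = np.random.default_rng(4)
X = rng.standard_normal((120, 8))
y = X[:, 0] - 2.0 * X[:, 3] + rng.standard_normal(120)
beta, support = two_parameter_fit(X, y, lam_select=0.2, lam_shrink=0.01)
print("support:", support, "refit coefficients:", beta[support].round(2))
```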